Search for: All records

Creators/Authors contains: "Katsaggelos, Aggelos K"

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. The first successful detection of gravitational waves by ground-based observatories, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO), marked a breakthrough in our comprehension of the Universe. However, due to the unprecedented sensitivity required to make such observations, gravitational-wave detectors also capture disruptive noise sources called glitches, which can be mistaken for or mask gravitational-wave signals. To address this problem, a community-science project, Gravity Spy, combines human insight and machine learning to classify glitches in LIGO data. The machine-learning classifier, integrated into the project since 2017, has evolved over time to accommodate a growing number of glitch classes. Despite its success, limitations arose in the ongoing LIGO fourth observing run (O4) because of the architecture's simplicity, which led to poor generalization and an inability to handle multi-time-window inputs effectively. We propose an advanced classifier for O4 glitches. Using data from previous observing runs, we evaluate different fusion strategies for multi-time-window inputs, apply label smoothing to counter noisy labels, and enhance interpretability through weights generated by an attention module. Our new O4 classifier shows improved performance and will enhance glitch classification, aiding the ongoing exploration of gravitational-wave phenomena.
    Free, publicly-accessible full text available July 29, 2026
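     The abstract above names three technical ingredients: fusing features from multiple time-window views of a glitch, training with label smoothing against noisy labels, and reading interpretability off attention weights. The sketch below shows one generic way these pieces can fit together in PyTorch; the module and parameter names (GlitchFusionClassifier, n_windows, n_classes, feat_dim) are illustrative assumptions, not the actual Gravity Spy O4 architecture.

         import torch
         import torch.nn as nn

         class GlitchFusionClassifier(nn.Module):
             """Attention-weighted fusion of per-time-window spectrogram features (illustrative sketch)."""
             def __init__(self, n_windows=4, n_classes=23, feat_dim=128):
                 super().__init__()
                 # shared CNN backbone applied to each time-window spectrogram
                 self.backbone = nn.Sequential(
                     nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
                     nn.Flatten(), nn.Linear(16 * 8 * 8, feat_dim), nn.ReLU(),
                 )
                 # attention module scores each window; the weights double as an interpretability signal
                 self.attention = nn.Linear(feat_dim, 1)
                 self.head = nn.Linear(feat_dim, n_classes)

             def forward(self, x):                                    # x: (batch, n_windows, 1, H, W)
                 b, w = x.shape[:2]
                 feats = self.backbone(x.flatten(0, 1)).view(b, w, -1)
                 attn = torch.softmax(self.attention(feats), dim=1)   # per-window weights
                 fused = (attn * feats).sum(dim=1)                    # attention-weighted fusion
                 return self.head(fused), attn.squeeze(-1)

         model = GlitchFusionClassifier()
         criterion = nn.CrossEntropyLoss(label_smoothing=0.1)         # label smoothing for noisy labels

     Other fusion strategies, such as averaging or concatenating the per-window features, can be compared by swapping out the attention step while keeping the rest of the sketch unchanged.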
  2. We explore the bi-directional relationship between human and machine learning in citizen science. Theoretically, the study draws on the concept of the zone of proximal development (ZPD), which allows us to describe AI augmentation of human learning, human augmentation of machine learning, and how tasks can be designed to facilitate co-learning. The study takes a design-science approach to explore the design, deployment, and evaluation of the Gravity Spy citizen science project. The findings highlight the challenges and opportunities of co-learning, where humans and machines contribute to each other’s learning and capabilities. Building on the co-learning literature, the study develops a framework for designing projects in which humans and machines mutually enhance each other’s learning. The research contributes to the existing literature by developing a dynamic approach to human-AI augmentation, emphasizing that the ZPD supports ongoing learning for volunteers and keeps machine learning aligned with evolving data. The approach offers potential benefits for project scalability, participant engagement, and automation, while acknowledging the importance of tutorials, community access, and expert involvement in supporting learning.
  3. Abstract The relationship of human brain structure to cognitive function is complex, and how this relationship differs between childhood and adulthood is poorly understood. One strong hypothesis suggests that the cognitive function of Fluid Intelligence (Gf) depends on the prefrontal and parietal cortices. In this work, we developed a novel graph convolutional neural network (gCNN) for the analysis of localized anatomic shape and the prediction of Gf. Morphologic information about the cortical ribbons and subcortical structures was extracted from T1-weighted MRIs within two independent cohorts, the Adolescent Brain Cognitive Development Study (ABCD; age: 9.93 ± 0.62 years) of children and the Human Connectome Project (HCP; age: 28.81 ± 3.70 years). Prediction combining cortical and subcortical surfaces yielded the highest accuracy for Gf in both the ABCD (R = 0.314) and HCP (R = 0.454) datasets, outperforming state-of-the-art prediction of Gf from any other brain measures in the literature. Across both datasets, the morphology of the amygdala, hippocampus, and nucleus accumbens, along with temporal, parietal, and cingulate cortex, consistently drove the prediction of Gf, suggesting a significant reframing of the relationship between brain morphology and Gf to include systems involved in reward/aversion processing, judgment and decision-making, motivation, and emotion.
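     The record above rests on graph convolutional networks operating on cortical and subcortical surface meshes. As a rough illustration of the underlying operation, the snippet below applies one standard GCN propagation step to per-vertex features over a mesh adjacency matrix; it is a minimal sketch of the general technique, not the architecture used in the study, and all names and sizes are placeholders.

         import numpy as np
         import scipy.sparse as sp

         def gcn_layer(X, A, W):
             """One GCN step: relu(D^-1/2 (A + I) D^-1/2 X W) over mesh vertices."""
             A_hat = A + sp.eye(A.shape[0])                 # add self-loops
             d = np.asarray(A_hat.sum(axis=1)).ravel()
             D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))        # symmetric normalization
             return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

         # toy mesh: 5 vertices in a ring, 3 per-vertex shape features, 8 output channels
         A = sp.csr_matrix(np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1))
         X = np.random.rand(5, 3)
         W = np.random.rand(3, 8)
         H = gcn_layer(X, A, W)   # (5, 8) vertex embeddings; pooling plus a regressor would predict Gf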
  4. This data set contains the individual classifications that Gravity Spy citizen science volunteers made for glitches through 20 July 2024. Classifications made by science team members or in testing workflows have been removed, as have classifications of glitches lacking a Gravity Spy identifier. See Zevin et al. (2017) for an explanation of the citizen science task and classification interface. Data about glitches with machine-learning labels are provided in an earlier data release (Glanzer et al., 2021). Final classifications combining machine-learning and volunteer classifications are provided in Zevin et al. (2022).
     Twenty-two of the classification labels match the labels used in the earlier data release: 1080Lines, 1400Ripples, Air_Compressor, Blip, Chirp, Extremely_Loud, Helix, Koi_Fish, Light_Modulation, Low_Frequency_Burst, Low_Frequency_Lines, No_Glitch, None_of_the_Above, Paired_Doves, Power_Line, Repeating_Blips, Scattered_Light, Scratchy, Tomte, Violin_Mode, Wandering_Line, and Whistle. One glitch class added to the machine-learning classification, Blip_Low_Frequency, has not been added to the Zooniverse project and so does not appear in this file. Four classes were added to the citizen science platform but not to the machine-learning model and so have only volunteer labels: 70HZLINE, HIGHFREQUENCYBURST, LOWFREQUENCYBLIP, and PIZZICATO. The glitch class Fast_Scattering, added to the machine-learning classification, has an equivalent volunteer label, CROWN, which is used here (Soni et al. 2021).
     Glitches are presented to volunteers in a succession of workflows. Each workflow contains glitches that the machine-learning classifier considers likely to belong to a subset of classes, and offers the option to classify only those classes plus None_of_the_Above. Each level includes the classes available in lower levels. The top level adds no new classification options but includes all glitches, including those for which the machine-learning model is uncertain of the class. Because the classes available to volunteers depend on the workflow, a glitch may be classified as None_of_the_Above in a lower workflow and as a different class in a higher workflow. Workflows and available classes are listed below.
     Workflow ID   Name               Glitch classes   Glitch classes added
     1610          Level 1            3                Blip, Whistle, None_of_the_Above
     1934          Level 2            6                Koi_Fish, Power_Line, Violin_Mode
     1935          Level 3            10               Chirp, Low_Frequency_Burst, No_Glitch, Scattered_Light
     2360          Original level 4   22               1080Lines, 1400Ripples, Air_Compressor, Extremely_Loud, Helix, Light_Modulation, Low_Frequency_Lines, Paired_Doves, Repeating_Blips, Scratchy, Tomte, Wandering_Line
     7765          New level 4        15               1080Lines, Extremely_Loud, Low_Frequency_Lines, Repeating_Blips, Scratchy
     2117          Original level 5   22               No new glitch classes
     7766          New level 5        27               1400Ripples, Air_Compressor, Paired_Doves, Tomte, Wandering_Line, 70HZLINE, CROWN, HIGHFREQUENCYBURST, LOWFREQUENCYBLIP, PIZZICATO
     7767          Level 6            27               No new glitch classes
     Description of data fields:
     Classification_id: a unique identifier for the classification. A volunteer may choose multiple classes for a glitch, in which case there will be multiple rows with the same classification_id.
     Subject_id: a unique identifier for the glitch being classified. This field can be used to join the classification to data about the glitch from the prior data release.
     User_hash: an anonymized identifier for the user making the classification; for anonymous users, an identifier that can be used to track the user within a session but may not persist across sessions.
     Anonymous_user: true if the classification was made by a user who was not logged in.
     Workflow: the Gravity Spy workflow in which the classification was made.
     Workflow_version: the version of the workflow.
     Timestamp: timestamp of the classification.
     Classification: glitch class selected by the volunteer.
     Related datasets:
     For machine-learning classifications of all glitches in O1, O2, O3a, and O3b, see Gravity Spy Machine Learning Classifications on Zenodo.
     For classifications of glitches combining machine-learning and volunteer classifications, see Gravity Spy Volunteer Classifications of LIGO Glitches from Observing Runs O1, O2, O3a, and O3b.
     For the training set used in the Gravity Spy machine-learning algorithms, see Gravity Spy Training Set on Zenodo.
     For detailed information on the training set used for the original Gravity Spy machine-learning paper, see Machine learning for Gravity Spy: Glitch classification and dataset on Zenodo.
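     A hypothetical sketch of reading the table described above with pandas and joining it to the glitch metadata from the prior data release via Subject_id; the file names are placeholders, the exact column casing in the released files may differ, and the metadata file is assumed to share the Subject_id column.

         import pandas as pd

         # one row per (classification_id, selected class); file names are illustrative
         cls = pd.read_csv("gravity_spy_volunteer_classifications.csv")
         glitches = pd.read_csv("gravity_spy_glitch_metadata.csv")    # from the earlier data release

         # keep classifications by logged-in volunteers and attach glitch metadata
         logged_in = cls[~cls["Anonymous_user"]]
         merged = logged_in.merge(glitches, on="Subject_id", how="left")

         # per-glitch vote counts over the volunteer-selected classes
         votes = merged.groupby(["Subject_id", "Classification"]).size().unstack(fill_value=0)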
  5. Abstract Optical coherence tomography (OCT) is an optical technique that allows for volumetric visualization of the internal structures of translucent materials. Additional information can be gained by measuring the rate of signal attenuation with depth. Techniques have been developed to estimate the rate of attenuation on a voxel-by-voxel basis. This depth-resolved attenuation analysis gives insight into tissue structure and organization in a spatially resolved way. However, the presence of speckle in the OCT measurement causes the attenuation coefficient image to contain unrealistic fluctuations and makes these images unreliable at the voxel level. While the distribution of speckle in OCT images has appeared in the literature, the resulting voxelwise corruption of the attenuation analysis has not. In this work, the estimated depth-resolved attenuation coefficient from OCT data with speckle is shown to be approximately exponentially distributed. A prior distribution for the depth-resolved attenuation coefficient is then derived for a simple system using statistical mechanics. Finally, given a set of depth-resolved estimates made from OCT data in the presence of speckle, a posterior probability distribution for the true voxelwise attenuation coefficient is derived and a Bayesian voxelwise estimator for the coefficient is given. These results are demonstrated in simulation and validated experimentally.
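     The abstract above outlines a generic Bayesian recipe: the speckle-corrupted, depth-resolved estimate is approximately exponentially distributed around the true attenuation coefficient, a prior is placed on that coefficient, and a voxelwise posterior estimator follows. The sketch below implements that recipe numerically on a grid with a placeholder prior; the paper's own prior is derived from statistical mechanics and is not reproduced here, and the units and grid range are assumptions.

         import numpy as np

         def posterior_mean(mu_hats, mu_grid, prior):
             """Posterior mean of the true attenuation mu given exponentially distributed estimates."""
             mu_hats = np.asarray(mu_hats)[:, None]
             # exponential likelihood with mean mu: p(mu_hat | mu) = (1/mu) * exp(-mu_hat / mu)
             like = np.prod(np.exp(-mu_hats / mu_grid) / mu_grid, axis=0)
             post = like * prior
             post /= np.trapz(post, mu_grid)                # normalize on the grid
             return np.trapz(mu_grid * post, mu_grid)       # Bayesian voxelwise estimate

         mu_grid = np.linspace(0.01, 10.0, 500)             # candidate attenuation values (e.g., mm^-1)
         prior = np.exp(-mu_grid / 2.0)                     # placeholder prior, not the paper's
         print(posterior_mean([1.8, 2.3, 2.0], mu_grid, prior))

     With a single estimate per voxel the same function applies with a one-element list; the grid-based form keeps the choice of prior flexible at the cost of a small numerical integration per voxel.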